
    Modifications of the Limited Memory BFGS Algorithm for Large-scale Nonlinear Optimization

    In this paper we present two new numerical methods for unconstrained large-scale optimization. These methods apply update formulae that are derived by considering different techniques of approximating the objective function. Theoretical analysis is given to show the advantages of using these update formulae. It is observed that these update formulae can be employed within the framework of the limited memory strategy with only a modest increase in the linear algebra cost. Comparative results with the limited memory BFGS (L-BFGS) method are presented.
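    The abstract does not reproduce the new update formulae, only that they fit the limited memory framework. As a point of reference, below is a minimal sketch of the standard L-BFGS two-loop recursion into which such formulae are typically incorporated; the function name, the Oren–Spedicato initial scaling, and the NumPy formulation are illustrative choices, not taken from the paper.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns d = -H_k * grad, where H_k is
    the inverse-Hessian approximation implicitly built from the stored correction
    pairs (s_i, y_i).  Only O(m*n) vector storage is used."""
    if not s_list:                          # no curvature information yet
        return -grad
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q = q - alpha * y
    # Initial matrix H_0 = gamma * I (Oren-Spedicato scaling, an assumed default).
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r = gamma * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + (alpha - beta) * s
    return -r
```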

    Stability analysis of predator - prey population model with time delay and constant rate of harvesting

    This paper studies the effect of time delay and harvesting on the dynamics of a predator-prey model with a time delay in the growth rate of the prey equation. The predator and prey are harvested at constant rates. The constant rates may drive the model to one, two, or no positive equilibrium points. When two positive equilibrium points exist, one of them may be stable. When the constant rates are quite small and the equilibrium point is not stable, an asymptotically stable limit cycle occurs. The results show that the time delay can induce instability of the stable equilibrium point, Hopf bifurcation, and stability switches.
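    The abstract does not display the model equations. One representative delayed predator-prey system with constant-rate harvesting that matches the verbal description (logistic prey growth with delay τ, a Lotka-Volterra-type interaction, and constant harvesting rates H₁, H₂; the functional forms and symbols are illustrative, not taken from the paper) is

```latex
\begin{aligned}
\dot{x}(t) &= r\,x(t)\Bigl(1-\frac{x(t-\tau)}{K}\Bigr) - a\,x(t)\,y(t) - H_{1},\\
\dot{y}(t) &= -d\,y(t) + b\,x(t)\,y(t) - H_{2}.
\end{aligned}
```

    Setting the right-hand sides to zero shows how the harvesting constants H₁ and H₂ determine whether two, one, or no positive equilibria exist, which is the case analysis described above.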

    Predicting minimum energy structure of a peptide via a modified potential smoothing kernel

    A global optimization approach is proposed for finding the global minimum energy configuration of a peptide. First, the original nonsmooth total potential energy function of a peptide, composed using the AMBER model, is transformed into a smoother function (shifted-impulsive transformation) via a procedure performed on each pair potential that constitutes the total potential energy function. Then, the Potential Smoothing and Search (PSS) procedure is used to locate the global minimum. Based on this procedure, a global optimum solution is generated for a synthetic peptide named Compstatin.
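    The shifted-impulsive transformation itself is not given in the abstract. As background only, potential smoothing methods such as PSS are typically built around a Gaussian convolution of the potential energy surface; a generic form of such a smoothing transformation (with β the smoothing parameter and 3N the number of atomic coordinates; the paper's actual shifted-impulsive variant is not reproduced here) is

```latex
\langle E \rangle(x,\beta)
  \;=\; (\pi\beta)^{-3N/2}\int_{\mathbb{R}^{3N}}
        E(x')\,\exp\!\left(-\frac{\|x-x'\|^{2}}{\beta}\right)\,dx'.
```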

    Convergence of a positive definite symmetric rank one method with restart

    The paper investigates convergence properties of a positive definite symmetric rank one method with line search. The method is applied to find a local minimum of an unconstrained minimization problem whose objective function is defined on ℝⁿ and is assumed to be twice continuously differentiable. The authors show that the method is (n+1)-step q-superlinearly convergent without the assumption of linearly independent iterates. It is only assumed that the Hessian approximations are positive definite and asymptotically bounded. Computational experience shows that the method satisfies these requirements well in practical computations.
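    For reference, the symmetric rank one update on which the method is built has the standard form below; the restart mechanism and line search conditions analyzed in the paper are not reproduced here.

```latex
B_{k+1} \;=\; B_k \;+\;
\frac{(y_k - B_k s_k)(y_k - B_k s_k)^{T}}{(y_k - B_k s_k)^{T} s_k},
\qquad s_k = x_{k+1}-x_k,\quad y_k = \nabla f(x_{k+1})-\nabla f(x_k).
```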

    Positive-definite memoryless symmetric rank one method for large-scale unconstrained optimization

    The memoryless quasi-Newton method is exactly the quasi-Newton method for which the approximation to the inverse of the Hessian is, at each step, updated from a positive multiple of the identity matrix. Hence, its search direction can be computed without storing any matrices, i.e., without O(n²) storage. In this paper, a memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is presented. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. Therefore, we propose that the memoryless SR1 method be updated from a positive scaling of the identity, in which the scaling factor is derived in such a way as to preserve positive definiteness and improve the condition of the scaled memoryless SR1 update. Under some standard conditions it is shown that the method is globally and R-linearly convergent. Numerical results show that the memoryless SR1 method is very encouraging.
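    A minimal sketch of how such a memoryless SR1 direction can be computed with vector operations only, assuming the SR1 update is applied to a scaled identity γI. The simple choice of γ and the fallback below are illustrative; the paper derives its own scaling factor, which is not reproduced here.

```python
import numpy as np

def memoryless_sr1_direction(g, s, y, sigma=0.9):
    """Memoryless SR1 search direction d = -H g, where
    H = gamma*I + v v^T / (v^T y) with v = s - gamma*y,
    i.e. an SR1 update of the scaled identity gamma*I.
    H is never formed; only inner products and vector updates are used."""
    sty = np.dot(s, y)
    if sty <= 0:                         # no useful curvature information
        return -g                        # plain steepest descent fallback
    # Positive definiteness of the SR1 update of gamma*I requires
    # (s - gamma*y)^T y = s^T y - gamma*y^T y > 0, i.e. gamma < s^T y / y^T y.
    gamma = sigma * sty / np.dot(y, y)   # illustrative choice with 0 < sigma < 1
    v = s - gamma * y
    vty = np.dot(v, y)                   # equals (1 - sigma) * s^T y > 0 here
    # d = -H g expanded term by term.
    return -(gamma * g + (np.dot(v, g) / vty) * v)
```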

    Scaled memoryless BFGS preconditioned steepest descent method for very large-scale unconstrained optimization

    A preconditioned steepest descent (SD) method for solving very large (with dimensions up to 10⁶) unconstrained optimization problems is presented. The basic idea is to incorporate the preconditioning technique into the framework of the SD method. The preconditioner, which is a scaled memoryless BFGS updating matrix, is chosen in place of the usual scaling strategy for the SD method. The scaled memoryless BFGS preconditioned SD direction can then be computed without any additional storage compared with a standard scaled SD direction. Under very mild conditions it is shown that, for uniformly convex functions, the method is globally and linearly convergent. Numerical results are also given to illustrate the use of such preconditioning within the SD method. Our numerical study shows that the new preconditioned SD method significantly outperforms the SD method with Oren-Luenberger scaling and the conjugate gradient method, and is comparable to the limited memory BFGS method.
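    A minimal sketch of a scaled memoryless BFGS preconditioned direction of the kind described above, computed from the most recent pair (s, y) without forming any n×n matrix. The Oren–Spedicato choice of the scaling γ used as a default here is an assumption for illustration, not the preconditioner scaling derived in the paper.

```python
import numpy as np

def memoryless_bfgs_sd_direction(g, s, y, gamma=None):
    """Preconditioned steepest-descent direction d = -H g, where H is a
    memoryless BFGS update of the scaled identity gamma*I:
        H = gamma*(I - rho*s*y^T)(I - rho*y*s^T) + rho*s*s^T,  rho = 1/(s^T y).
    Only inner products and vector updates are needed, so storage stays O(n)."""
    rho = 1.0 / np.dot(s, y)
    if gamma is None:
        gamma = np.dot(s, y) / np.dot(y, y)   # illustrative Oren-Spedicato scaling
    sg, yg = np.dot(s, g), np.dot(y, g)
    # Expand d = -H g term by term; no n-by-n matrix is ever formed.
    d = -gamma * g
    d = d + gamma * rho * (yg * s + sg * y)
    d = d - (rho + gamma * rho * rho * np.dot(y, y)) * sg * s
    return d
```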

    Scaled memoryless symmetric rank one method for large-scale optimization.

    This paper concerns the memoryless quasi-Newton method, which is precisely the quasi-Newton method for which the approximation to the inverse of the Hessian is, at each step, updated from the identity matrix. Hence its search direction can be computed without storing any matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is developed. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. Therefore we propose a memoryless SR1 method that is updated from a positive scaling of the identity, where the scaling factor is derived in such a way that positive definiteness of the updating matrices is preserved while the condition of the scaled memoryless SR1 update is improved. Under very mild conditions it is shown that, for strictly convex objective functions, the method is globally convergent with a linear rate of convergence. Numerical results show that the optimally scaled memoryless SR1 method is very encouraging.
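    In matrix form, applying the SR1 update for the inverse Hessian to a positively scaled identity γₖI gives the approximation below (with sₖ, yₖ the usual difference vectors); it is guaranteed positive definite whenever γₖ > 0 and the denominator is positive, which constrains the admissible scaling factors as shown. The optimal γₖ derived in the paper is not reproduced here.

```latex
H_k \;=\; \gamma_k I \;+\;
\frac{(s_k-\gamma_k y_k)(s_k-\gamma_k y_k)^{T}}{(s_k-\gamma_k y_k)^{T} y_k},
\qquad
(s_k-\gamma_k y_k)^{T} y_k > 0
\;\Longleftrightarrow\;
\gamma_k < \frac{s_k^{T} y_k}{y_k^{T} y_k}.
```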

    A new gradient method via least change secant update

    The Barzilai–Borwein (BB) gradient method compares favourably with the classical steepest descent method both in theory and in real computations. This method takes a 'fixed' step size rather than following a set of line search rules to ensure convergence. Along this line, we present a new approach to the two-point approximation of the quasi-Newton equation within the BB framework, based on a well-known least change result for the Davidon–Fletcher–Powell update, and propose a new gradient method that belongs to the same class as the BB gradient method, in which the line search procedure is replaced by a fixed step size. Some preliminary numerical results suggest that improvements have been achieved.
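    A minimal sketch of a BB-type gradient iteration with the classical BB1 step size, included to make the 'fixed step size instead of line search' idea concrete. The new step size derived in the paper from the DFP least change result is not reproduced here, and the stopping rule and safeguard are illustrative.

```python
import numpy as np

def bb_gradient(f_grad, x0, alpha0=1.0, tol=1e-6, max_iter=1000):
    """Barzilai-Borwein gradient method with the BB1 step size
    alpha_k = s^T s / s^T y (a two-point secant approximation).
    No line search is performed."""
    x = np.asarray(x0, dtype=float)
    g = f_grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - alpha * g
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        sty = np.dot(s, y)
        # BB1 step; guard against non-positive curvature s^T y <= 0.
        alpha = np.dot(s, s) / sty if sty > 0 else alpha0
        x, g = x_new, g_new
    return x
```

    Applied, for example, to the convex quadratic f(x) = ||x||² with gradient 2x, the iteration converges quickly without any line search.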

    Towards large scale unconstrained optimization

    A large-scale unconstrained optimization problem arises when the dimension n is large. The notion of 'large scale' is machine dependent, and hence it can be difficult to state a priori when a problem is of large size; however, today an unconstrained problem with 400 or more variables is usually considered a large-scale problem. The main difficulty in dealing with large-scale problems is that effective algorithms for small-scale problems do not necessarily translate into efficient algorithms when applied to large-scale problems. Therefore, in dealing with unconstrained problems with a large number of variables, modifications must be made to the standard implementations of the many existing algorithms for the small-scale case. One of the most effective Newton-type methods for solving large-scale problems is the truncated Newton method. This method computes a Newton-type direction by truncating the conjugate gradient iterates (inner iterations) whenever a required accuracy is obtained, so that superlinear convergence is guaranteed. Another effective approach to large-scale unconstrained optimization is the limited memory BFGS method. This method meets the storage requirements of large-scale problems because the storage of matrices is avoided by storing a small number of vector pairs. The symmetric rank one (SR1) update is one of the simplest quasi-Newton updates for solving large-scale problems; however, a basic disadvantage is that the SR1 update may not preserve positive definiteness even when updated from a positive definite approximation. A simple restart procedure for the SR1 method, using a standard line search to avoid the loss of positive definiteness, is implemented. The matrix-storage-free BFGS (MF-BFGS) method combines a restarting strategy with the BFGS method. We also construct a new matrix-storage-free method that uses the SR1 update (MF-SR1). The MF-SR1 method is superior to the MF-BFGS method on some problems, but on other problems the MF-BFGS method is more competitive because of its rapid convergence. The matrix-storage-free methods can be greatly accelerated by means of a simple scaling; therefore, by a simple scaling of the SR1 and BFGS methods, we can improve the methods tremendously.
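    The limited memory BFGS and memoryless/matrix-storage-free directions mentioned here are sketched in the entries above. For the truncated Newton approach, the following is a minimal sketch of the inner conjugate gradient loop, truncated on an illustrative residual-based forcing rule and on detection of non-positive curvature; the precise truncation rule and restart strategy studied in the paper are not reproduced.

```python
import numpy as np

def truncated_newton_direction(grad, hess_vec, tol_factor=0.5, max_cg=None):
    """Inner conjugate-gradient loop of a truncated Newton method:
    approximately solves H d = -g, accessing the Hessian H only through
    Hessian-vector products hess_vec(v).  The iteration stops early when
    the residual is small relative to ||g|| (illustrative forcing term)
    or when non-positive curvature is detected."""
    n = grad.shape[0]
    if max_cg is None:
        max_cg = n
    d = np.zeros(n)
    r = -grad.copy()                  # residual of H d = -g at d = 0
    p = r.copy()
    eps = tol_factor * min(1.0, np.linalg.norm(grad)) * np.linalg.norm(grad)
    rr = np.dot(r, r)
    for _ in range(max_cg):
        Hp = hess_vec(p)
        curv = np.dot(p, Hp)
        if curv <= 0:
            # Non-positive curvature: return the current iterate, or the
            # steepest descent direction if no progress has been made yet.
            return d if np.any(d) else -grad
        alpha = rr / curv
        d = d + alpha * p
        r = r - alpha * Hp
        rr_new = np.dot(r, r)
        if np.sqrt(rr_new) <= eps:    # required accuracy reached: truncate
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d
```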

    Adaptive stabilization and control of stochastic time-varying systems

    We introduce the concept of an adaptive control Lyapunov function for the notion of global asymptotic stability in probability of stochastic time-varying systems, and use the stochastic version of Florchinger's control law established in Abedi et al. to design an adaptive controller. In this framework, the problem of adaptive stabilization of a nonlinear stochastic system is reduced to the problem of nonadaptive stabilization of a modified system.